Helm Charts

Helm Charts are available for every release build in the Gigamon software portal. To get started, download the respective UCT-C release build from the repository and untar the gigamon-gigavue-uctc-helm-6.11.tgz file. After untarring the file, navigate to the Helm folder and update the fields mentioned in Deploy using Helm Chart before deployment.

Notes:
■   Contact Technical Support or Contact Sales for information on downloading the respective UCT-C build from the Gigamon software portal.
■   Support for two separate Helm Charts is deprecated starting with software version 6.7.00.

Using Helm with an External values.yaml File:

To deploy the chart using an external values file, follow the steps below:

  1. Download the original Helm chart from the Gigamon software portal, either locally or to a remote Helm chart repository.

  2. Untar/unzip the chart, then copy the original values.yaml file to a new file named 'myvalues.yaml'.

  3. Edit the myvalues.yaml file with the necessary changes as mentioned in Deploy using Helm Chart.

Use the deployment command below to install the chart:

helm install uctc -f myvalues.yaml gigamon-gigavue-uctc-helm-6.11.tgz -n uct-ns

Deploy using Helm Chart

You can deploy UCT-C Controller and TAPs using Helm Chart. Follow the steps listed below to deploy the solution.

  1. Use the command below to unzip and untar the .tgz file:

gunzip <name of the UCT-C .tgz file>

tar -xvf <name of the UCT-C .tar file>

After extracting the tar file, navigate to the Helm folder in the newly created uctc-<image version>-<build number> folder and update the details given in the steps below.

  2. Update the imagePullSecrets, namespace, GigaVUE-FM IP, external load balancer IP, and Kubernetes API URL in the following section of the values.yaml file present in the UCT-C directory.

imagePullSecrets: [{name: secret}]

namespace: uctc

If only an IPv4 address is provided and fm_ipv6 is not configured, GigaVUE-FM will use the IPv4 address exclusively for communication.

fm_ip: "<FM IPv4>"

If only an IPv6 address is provided and fm_ip is not configured, GigaVUE-FM will use the IPv6 address for communication.

fm_ipv6: "<FM IPv6>"

If both IPv4 and IPv6 are provided, UCTC_CNTLR_FM_IP_CONFIG will be used to choose the preferred IP stack.

ext_load_balancer: "<FM IPv4 or FM IPv6 or FM IPv4,FM IPv6 (both, comma-separated)>"

Refer to the examples below:

# example1: 192.168.0.10 (IPv4)

# example2: 2001:db8:abcd:ef01::5 (IPv6)

# example3: 192.168.0.10,2001:db8:abcd:ef01::5 (IPv4,IPv6)
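The dual-stack form in example 3 is a plain comma-separated pair. The sketch below (an illustration only, not part of the chart or any Gigamon tooling) shows how such a value splits into its IPv4 and IPv6 parts:

```shell
# Split a dual-stack ext_load_balancer value into its two stacks.
ext_load_balancer="192.168.0.10,2001:db8:abcd:ef01::5"
ipv4="${ext_load_balancer%%,*}"   # text before the first comma
ipv6="${ext_load_balancer#*,}"    # text after the first comma
echo "IPv4=$ipv4"
echo "IPv6=$ipv6"
```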

k8s_cluster_url: "<url>"

# example: https://10.10.10.12:6443
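Putting the fields above together, a values.yaml fragment might look like the following. This is a hedged sketch: the addresses and the secret name are placeholders, and the uctcController nesting is assumed from the --set paths shown later in this section.

```yaml
imagePullSecrets: [{name: secret}]
namespace: uctc

uctcController:
  fm_ip: "192.168.0.10"                                   # placeholder IPv4
  fm_ipv6: "2001:db8:abcd:ef01::5"                        # placeholder IPv6
  ext_load_balancer: "192.168.0.10,2001:db8:abcd:ef01::5" # dual-stack form
  k8s_cluster_url: "https://10.10.10.12:6443"
```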

  3. If you provide both IPv4 and IPv6 addresses (fm_ip and fm_ipv6), the following configurations will be used:

# values: <IPv4 | IPv6>

uctc_tap_ip_config: "IPv4"

# values: <true | false>

uctc_tap_fallback_config: "false"

The IP CONFIG option allows you to specify the preferred IP version. If you do not provide a value, the default, IPv4, is used.

When the preferred IP version fails to connect (for example, IPv6), the FALLBACK CONFIG is used to connect over the other available IP version (for example, IPv4). The default value, true, enables the fallback mechanism.

Notes:
■   Fallback configuration will be used during the node registration phase only.
■   Controller FM IP and FALLBACK configurations will be used only if you provide both IPv4 and IPv6.
■   Default values will be used if you do not provide any options.
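On the controller side, the equivalent dual-stack preferences are set with the uctc_cntlr_* keys. A hedged sketch (the values shown are illustrative, not defaults):

```yaml
uctcController:
  uctc_cntlr_fm_ip_config: "IPv6"    # preferred stack when both fm_ip and fm_ipv6 are set
  uctc_cntlr_fallback_config: "true" # fall back to the other stack during registration
```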
  4. Edit the following volumeMounts according to your container runtime:

crisocketvolume:
  mountPath: /var/run/containerd/containerd.sock
  name: socket

The socket locations for commonly used CRIs are as follows:

docker - /var/run/docker.sock

containerd - /var/run/containerd/containerd.sock

cri-o - /var/run/crio/crio.sock
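For example, on a Docker-based node the same volumeMounts entry would point at the Docker socket (a sketch; adjust mountPath to match your runtime):

```yaml
crisocketvolume:
  mountPath: /var/run/docker.sock  # docker; use /var/run/crio/crio.sock for cri-o
  name: socket
```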

  5. Run the command below from the location where the UCT-C folder is present.

helm install uctc ./uctc -n <Namespace>

Note:  You can skip steps 1-5 and use the command below to deploy the UCT-C Controller and TAPs directly using the Helm Chart.

helm install uctc -n uctc ./uctc --set namespace=uctc --set serviceAccount.name=test --set imagePullSecrets[0].name=gigamon --set uctcController.fm_ip=x.x.x.x --set uctcController.ext_load_balancer=x.x.x.x --set uctcController.k8s_cluster_url=https://x.x.x.x:6443 --set uctcController.uctc_cntlr_fm_ip_config=IPv4 --set uctcTap.uctc_tap_ip_config=IPv4 --set uctcTap.cri_socket_path=/run/containerd/containerd.sock

Validate UCT-C Deployment

To validate the UCT-C deployment and check for any failures, set the validation value to 'true' in the values.yaml file. Two pods, uctc-prevalidator-pod and uctc-postvalidator-pod, will be deployed to verify that the conditions for a successful deployment are met. If all the checks pass, the validator pods clean up automatically and UCT-C is deployed successfully. If any check fails, the corresponding validator pod (either uctc-prevalidator-pod or uctc-postvalidator-pod) remains in an error state and the Helm installation fails.

Follow the steps listed below if the installation fails due to a validation error:

1.   Check Logs: To review the logs of the failed pod and identify the issue, use the following command, specifying the name of the failed pod.

kubectl logs <uctc-prevalidator-pod | uctc-postvalidator-pod> -n uctc

2. Delete the Helm Release: Use the following command to delete the failed Helm release.

helm uninstall uctc -n uctc

3. Delete the Error Pod: After you identify the cause of failure, you can delete the failed validator pod (uctc-prevalidator-pod or uctc-postvalidator-pod depending on which pod has failed) and re-run the Helm installation. Use the command below to delete the failed validator pod.

kubectl delete pod <uctc-prevalidator-pod | uctc-postvalidator-pod> -n uctc

4. Run the command below from the location where the UCT-C folder is present.

helm install uctc ./uctc -n <Namespace>

Note:  If there are intermittent connectivity issues between GigaVUE-FM and the cluster, set 'validation' to false before installing the Helm chart to avoid false negative failures.
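In values.yaml, the corresponding switch is the top-level validation flag (a sketch):

```yaml
# Disable the pre/post validator pods when FM connectivity is intermittent
validation: false
```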

The following table lists the configurable parameters and their default values defined in the values.yaml file.

UCT-C Controller configuration

uctcController.image.repository - UCT-C Controller Docker image repository. Default: gigamon/gigamon-gigavue-uctc-cntlr

uctcController.image.tag - UCT-C Controller Docker image tag. Default: XXX_IMAGE_TAG_XXX

uctcController.nameOverride - Overrides the default resource name generated by the chart's templates. Default: uctc-cntlr

uctcController.fullnameOverride - The provided name is combined with the default resource name. Default: none

uctcController.podAnnotations - Annotations can be added based on user requirements. Default: none

uctcController.service.name - Name of the UCT-C Controller Service to be created. Default: uctc-cntlr-service

uctcController.service.type - Type of the service to be created. Default: ClusterIP

uctcController.fm_ip - IPv4 address of GigaVUE-FM. If only IPv4 is provided and fm_ipv6 is not specified, GigaVUE-FM defaults to IPv4 for communication. If both fm_ip and fm_ipv6 are provided, the preferred IP stack is selected based on uctc_cntlr_fm_ip_config. Default: none

uctcController.fm_ipv6 - IPv6 address of GigaVUE-FM. If only IPv6 is provided and fm_ip is not specified, GigaVUE-FM defaults to IPv6 for communication. If both fm_ip and fm_ipv6 are provided, the preferred IP stack is determined by uctc_cntlr_fm_ip_config. Default: none

uctcController.uctc_svc_rest_port - Port on which the UCT-C Controller listens for GigaVUE-FM requests. Default: 8443

uctcController.fm_svc_rest_port - Port on which GigaVUE-FM listens for UCT-C Controller registration messages. Default: 443

uctcController.k8s_cluster_url - Kubernetes cluster endpoint (typically the master nodes, with the default port of 6443). Default: none

uctcController.uctc_cntlr_fm_ip_config - Preferred IP stack for communication between the UCT-C Controller and a dual-stack GigaVUE-FM. Default: IPv4

uctcController.uctc_cntlr_fallback_config - If set to 'True', the alternative IP stack is used when the preferred one is unavailable. Default: True

uctcController.uctc_cntlr_inventory_batch_sz - UCT-C inventory batch size, in number of pods. Default: 25

uctcController.resources.limits.cpu - Maximum processing power, in CPU units, that the UCT-C Controller pod is allowed to use. Default: 100m

uctcController.resources.limits.memory - Maximum amount of RAM, in mebibytes, that the UCT-C Controller pod is allowed to consume. Default: 256Mi

uctcController.resources.requests.cpu - Minimum processing power, in CPU units, that the UCT-C Controller pod needs. Default: 100m

uctcController.resources.requests.memory - Minimum amount of RAM, in mebibytes, that the UCT-C Controller pod needs. Default: 256Mi

uctcController.nodeSelector - Node labels to target the Kubernetes nodes on which the UCT-C Controller pod should be scheduled. Default: none

uctcController.tolerations - Tolerations for specific taints on nodes. Default: none

uctcController.affinity - Affinity rules to place the UCT-C Controller pod on nodes based on complex rules. Default: none

Note:  The controller's service port is now provided to GigaVUE-FM. By default, the controller pod continues to listen on port 8443, as defined by the command parameter. If the service must be exposed on a different port, such as 443, this can be configured in the Service YAML. Kubernetes manages the redirection, allowing external access through the specified port (443) while the controller internally continues to operate on port 8443.

Example: Sample Configuration in the Service YAML:

namespace: uct
spec:
  ports:
    - port: 443
      protocol: TCP
      name: uct-rest
      targetPort: 8443

 

UCT-C Tap configuration

uctcTap.image.repository - UCT-C Tap Docker image repository. Default: gigamon/gigamon-gigavue-uctc-tap

uctcTap.image.tag - UCT-C Tap Docker image tag. Default: XXX_IMAGE_TAG_XXX

uctcTap.nameOverride - Overrides the default resource name generated by the chart's templates. Default: none

uctcTap.fullnameOverride - The provided name is combined with the default resource name. Default: none

uctcTap.podAnnotations - Annotations to be added to the pod. Default: none

uctcTap.podSecurityContext - Security context to be added to the pod. Default: none

uctcTap.uctc_tap_ip_config - Preferred IP stack for communication between the UCT-C Controller and the UCT-C Tap. Default: IPv4

uctcTap.uctc_tap_fallback_config - If set to 'True', the alternative IP stack is used when the preferred one is unavailable. Default: True

uctcTap.uctc_cntlr_inventory_batch_sz - UCT-C inventory batch size, in number of pods. Default: 25

uctcTap.resources.limits.cpu - Maximum processing power, in CPU units, that the UCT-C Tap pod is allowed to use. Default: 1

uctcTap.resources.limits.memory - Maximum amount of RAM, in gibibytes, that the UCT-C Tap pod is allowed to consume. Default: 1Gi

uctcTap.resources.requests.cpu - Minimum processing power, in CPU units, that the UCT-C Tap pod needs. Default: 1

uctcTap.resources.requests.memory - Minimum amount of RAM, in gibibytes, that the UCT-C Tap pod needs. Default: 1Gi

uctcTap.cri_socket_path - Container runtime-specific socket path. For example: cri-o -> /run/crio/crio.sock, containerd -> /run/containerd/containerd.sock, docker -> /var/run/docker.sock. Default: none

uctcTap.nodeSelector - Node labels to target the Kubernetes nodes on which the UCT-C Tap pod should be scheduled. Default: none

uctcTap.tolerations - Tolerations for specific taints on nodes. Default: none

uctcTap.affinity - Affinity rules to place the UCT-C Tap pod on nodes based on complex rules. Default: none

Common Configuration

namespace - Namespace in which the UCT-C components are to be deployed. Default: uct

serviceAccount.create - Specifies whether a service account should be created. Default: FALSE

serviceAccount.annotations - Annotations to add to the service account. Default: none

serviceAccount.name - Name of the service account to use if serviceAccount.create is set to true. Default: gigamon

imagePullSecrets - List of image pull secrets for private registries. Default: gigamon (customize as needed)

ingress.enabled - Specify whether an Ingress controller is already installed. Setting this to 'true' creates an ingress resource. Default: TRUE

ingress.annotations - Annotations to be added to the ingress. Default: none

ingress.annotations.kubernetes.io/ingress.class - Class name of the ingress controller to be used. Default: nginx

debugmode - Specified in the hex form 0x00[aaaa][b][c], where aaaa is the number of pcap messages to maintain before rollover; b is 0 (do not create pcap) or 1 (create pcap); and c is a value from 1 to 4 (1=fatal, 2=error, 3=info, 4=debug). Default: 0x0A000003

securityContextConstraints.create - If the platform is OpenShift, set this to true to create a custom Security Context Constraint (SCC) with the required permissions for the UCT-C solution. Note: When using privileged ports, ensure that the controller is launched with root privileges. Default: FALSE

securityContextConstraints.name - Name of the Security Context Constraint (SCC) to be created. Default: gigamon

validation - Setting this to true deploys a pre-validator pod before and a post-validator pod after the deployment of the UCT-C solution. Default: TRUE
You can customize the Helm chart by modifying the values in the values.yaml file or by specifying each parameter with the --set key=value flag on the helm install command.